Current Issue: April-June | Volume: 2025 | Issue Number: 2 | Articles: 5
With the recent development of computer graphics and image processing technology, this paper describes the progress of cloud image generation and processing techniques. Clouds in nature usually have complex, changeable shapes, so the accurate and detailed processing of cloud images has long been an important research topic in computer graphics. The paper first summarizes the application value of fractal theory in cloud image processing. It then presents the fractal generation and edge thinning algorithms as the crucial technical components of the cloud image detail-processing framework. The fractal generation algorithm uses principles of fractal geometry to simulate the basic shape of a cloud, while the edge thinning algorithm enhances the realism of cloud edges by adding small-scale features along the boundary....
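The abstract does not reproduce the paper's fractal generation algorithm, so the following is only a minimal sketch of one standard fractal technique (fractional Brownian motion, i.e. summed octaves of rescaled random noise) that produces a cloud-like density field of the kind described. All function and parameter names here are illustrative, not the authors'.

```python
# Hedged sketch of fractal cloud synthesis via fractional Brownian motion.
# Assumption: the paper's method differs in detail; this only illustrates
# the general idea of fractal-based cloud shape generation.
import numpy as np
from scipy.ndimage import zoom

def fractal_cloud(size=256, octaves=6, persistence=0.5, seed=0):
    rng = np.random.default_rng(seed)
    field = np.zeros((size, size))
    amplitude, total = 1.0, 0.0
    for o in range(octaves):
        n = 2 ** (o + 2)                       # coarse grid for this octave
        coarse = rng.random((n, n))
        # upsample the coarse noise to full resolution (spline smoothing)
        layer = zoom(coarse, size / n, order=3)[:size, :size]
        field += amplitude * layer
        total += amplitude
        amplitude *= persistence               # finer octaves contribute less
    field /= total
    # remap to [0, 1] so the field reads as cloud density against sky
    return np.clip((field - field.mean()) / field.std() * 0.5 + 0.5, 0, 1)

if __name__ == "__main__":
    cloud = fractal_cloud()
    print(cloud.shape, float(cloud.min()), float(cloud.max()))
```

The edge-thinning step the abstract mentions would then perturb the boundary of such a field at small scales; that refinement is not sketched here because the abstract gives no detail on it.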
As the demand for computational power increases drastically, traditional solutions struggle to keep up, and alternative computing paradigms have proliferated to tackle this disparity. Approximate Computing (AxC) has emerged as a modern way of improving speed, area efficiency, and energy consumption in error-resilient applications such as image processing and machine learning, at the cost of some accuracy. From a technology point of view, memristors have garnered significant attention due to their low power consumption and inherent non-volatility, which make them suitable for In-Memory Computation (IMC), another computing paradigm that has risen to tackle the aforementioned gap between demand growth and performance improvement. In this work, we leverage a memristive stateful in-memory logic, namely Material Implication (IMPLY), and investigate advanced adder topologies within the context of AxC, aiming to combine the strengths of both of these novel computing paradigms. We present two approximated algorithms for each IMPLY-based adder topology. When embedded in a Ripple Carry Adder (RCA), they reduce the number of steps by 6%-54% and the energy consumption by 7%-54% compared to the corresponding exact full adders. Compared to State-of-the-Art (SoA) approximations at the circuit level, our designs improve speed and energy efficiency by up to 72% and 34%, respectively, while lowering the Normalized Median Error Distance (NMED) by up to 81%. We evaluate our adders in four common image processing applications, for which we also introduce two new test datasets. When applied to image processing, our proposed adders reduce the number of steps by up to 60% and the energy consumption by up to 57%, while also improving the quality metrics over the SoA in most cases....
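The abstract does not specify the two approximated IMPLY algorithms, so the sketch below models a generic approximate ripple-carry adder instead (a "lower-part OR" scheme, where the k least significant bits are replaced by a bitwise OR and the carry chain starts at bit k) and measures its error exhaustively. The NMED computation follows the usual definition: the median error distance normalized by the maximum exact output.

```python
# Hedged sketch: a generic approximate RCA and its NMED, not the paper's
# IMPLY-based designs. Parameters n_bits and k are illustrative.
import numpy as np

def approx_rca(a, b, n_bits=8, k=3):
    """n-bit adder whose k LSBs are approximated by OR (no carry below bit k)."""
    mask = (1 << k) - 1
    low = (a | b) & mask                  # approximate lower part
    high = ((a >> k) + (b >> k)) << k     # exact upper part, carry-in assumed 0
    return (high | low) & ((1 << (n_bits + 1)) - 1)

def nmed(n_bits=8, k=3):
    vals = np.arange(1 << n_bits)
    a, b = np.meshgrid(vals, vals)        # all 2^n x 2^n input pairs
    exact = a + b
    ed = np.abs(exact - approx_rca(a, b, n_bits, k))
    return np.median(ed) / float(exact.max())

if __name__ == "__main__":
    for k in (2, 3, 4):
        print(f"k={k}: NMED = {nmed(k=k):.5f}")
```

Sweeping k exposes the same speed/energy-versus-accuracy trade-off the abstract quantifies for the IMPLY adders: a longer approximated lower part saves carry-propagation steps but raises the NMED.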
Coal bed methane is one of the world's clean energy sources. Methane molecules are confined inside the pores of coal and are released when gas drainage wells are drilled into the coal seams, due to the resulting pressure difference. Coal cleats are spread throughout the coal seam as face and butt cleats and play an essential role in methane gas drainage operations in coal mines. Calculating the cleat spacing greatly assists in modeling the amount of emitted gas and, in turn, in calculating the spacing of gas suction wells; it also determines the time required for gas drainage. In this study, a computer-based image processing technique was used to identify cleat spacing: three-dimensional computed tomography scan images were prepared for coal samples from the Tabas Coal Mine in Iran, the coal cleats were identified by image processing, and their spacing was then calculated. Based on the results, the average distance between face cleats was 20 to 30 mm and the distance between butt cleats was 15 to 25 mm. These results can be used to model the amount of methane gas in the coal cleats and, based on that model, to estimate the amount of released gas and design the distance between the wells....
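The abstract does not detail the CT image-processing pipeline, so the following is only a plausible minimal sketch: segment cleats in a CT slice by thresholding (fractures image darker than intact coal), then estimate mean cleat spacing from run lengths between cleat pixels along scan lines crossing the cleat set. The file name, threshold, and pixel size below are assumptions, not values from the study.

```python
# Hedged sketch of cleat-spacing estimation from a single CT slice.
# Assumptions: cleats are darker than coal; scan lines run roughly
# perpendicular to the cleat set; PIXEL_SIZE_MM must be calibrated.
import numpy as np
import imageio.v3 as iio

PIXEL_SIZE_MM = 0.1  # assumed voxel size, from scanner calibration

def cleat_spacing_mm(slice_path="ct_slice.png", threshold=60):
    img = iio.imread(slice_path)
    if img.ndim == 3:                        # collapse RGB to grayscale
        img = img.mean(axis=2)
    cleats = img < threshold                 # dark pixels -> cleat candidates
    spacings = []
    for row in cleats:                       # one scan line per image row
        idx = np.flatnonzero(row)            # cleat pixel positions on the line
        if idx.size > 1:
            gaps = np.diff(idx)
            spacings.extend(gaps[gaps > 1])  # skip adjacent pixels of one cleat
    if not spacings:
        return float("nan")
    return float(np.mean(spacings)) * PIXEL_SIZE_MM

if __name__ == "__main__":
    print(f"mean cleat spacing ~ {cleat_spacing_mm():.1f} mm")
```

On the study's samples such a measurement would be run separately along directions perpendicular to the face and butt cleat sets to obtain the two spacing ranges reported above.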
Objective ‒ With the popularity of high-resolution devices such as high-definition and ultra-high-definition televisions and smartphones, the demand for high-resolution images is increasing, which places higher requirements on high-resolution image processing and entity recognition technology. Method ‒ This article reviews the research progress and application of high-resolution image processing and entity recognition algorithms from the perspective of artificial intelligence (AI). It first introduces the important role of AI in high-resolution image processing and entity recognition, then surveys the applications of deep learning-based algorithms in high-resolution image grayscale equalization, denoising, and deblurring. It next explores the application of AI-based object detection and image segmentation algorithms in entity recognition, and verifies the superiority and accuracy of AI-based high-resolution image processing and entity recognition algorithms through training and testing experiments. Finally, it offers a summary and outlook for AI-based high-resolution image processing and entity recognition algorithms. Result ‒ Experimental testing showed that AI-based high-resolution image processing and entity recognition was more efficient: overall image recognition ability improved by 29.6% compared to traditional image recognition models, and recognition speed and accuracy also improved. Conclusion ‒ AI-based high-resolution image processing and entity recognition algorithms enable observers to see the detailed information in an image more clearly, thus improving the efficiency and accuracy of image analysis. Through continued improvement of algorithm performance, real-time application, and expansion into cross-disciplinary applications, ever more advanced and powerful image processing and entity recognition technologies can be expected, bringing strong impetus to research and application in many fields....
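The article's network architecture is not given in the abstract; the sketch below shows a minimal residual CNN denoiser of the DnCNN family, the kind of deep learning model commonly used for the high-resolution denoising and deblurring tasks the Method section mentions. Depth and channel counts are illustrative, not the authors' settings.

```python
# Hedged sketch: a tiny DnCNN-style residual denoiser, standing in for the
# unspecified deep learning models the article surveys.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # residual learning: the network predicts the noise, which is
        # subtracted from the input to recover the clean image
        return x - self.body(x)

if __name__ == "__main__":
    net = TinyDenoiser()
    noisy = torch.randn(1, 1, 256, 256)   # stand-in noisy grayscale crop
    print(net(noisy).shape)               # torch.Size([1, 1, 256, 256])
```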
The computer-assisted inverse design of photonic computing, especially when leveraging artificial intelligence algorithms, offers great convenience for accelerating development and improving calculation accuracy. However, traditional thickness-based modulation methods are hindered by large volume and a difficult fabrication process, making it hard to meet the data-driven requirements of flexible light modulation. Here, we propose a diffractive deep neural network (D2NN) framework based on a three-layer all-dielectric phased transmitarray as hidden layers, which can classify handwritten digits. By tailoring the radius of the silicon nanodisk of each meta-atom, the metasurface can realize the phase profile calculated by the D2NN while maintaining a relatively high transmittance of 0.9 at a wavelength of 600 nm. The designed image classifier consists of three layers of phase-only metasurfaces, each containing 1024 units, mimicking a fully connected neural network through the diffraction of light fields. The classification of handwritten digits '0' through '5' is verified, with an accuracy of over 90% on the blind test dataset, and is also demonstrated by full-wave simulation. Furthermore, the performance on the more complex animal image classification task is validated by increasing the number of neurons to enhance the connectivity of the neural network. This study may provide a possible solution for practical applications such as biomedical detection, image processing, and machine vision based on all-optical computing....
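The computational core of a D2NN is alternating free-space propagation and phase-only modulation; the sketch below illustrates it with the angular spectrum method on a 32 x 32 grid (matching the 1024 units per layer the abstract describes) at the 600 nm wavelength it states. The pixel pitch and propagation distance are illustrative assumptions, not the paper's design values.

```python
# Hedged sketch of a D2NN forward pass: angular-spectrum propagation between
# three phase-only layers, read out as intensity on a detector plane.
import numpy as np

N, PITCH, WAVELEN, DIST = 32, 1e-6, 600e-9, 50e-6  # units: m; assumed values

def angular_spectrum(field, dist):
    """Propagate a complex field by `dist` using the angular spectrum method."""
    fx = np.fft.fftfreq(N, d=PITCH)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / WAVELEN**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dist))

def d2nn_forward(field, phase_masks):
    """Alternate propagation and phase-only modulation, as in a D2NN."""
    for phi in phase_masks:
        field = angular_spectrum(field, DIST)
        field = field * np.exp(1j * phi)            # phase-only 'weights'
    return np.abs(angular_spectrum(field, DIST))**2 # detector reads intensity

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    masks = [rng.uniform(0, 2 * np.pi, (N, N)) for _ in range(3)]  # 3 layers
    digit = rng.random((N, N))                       # stand-in input image
    print(d2nn_forward(digit, masks).shape)          # (32, 32) intensity map
```

In the paper's setting the phase masks are trained like neural network weights and then physically realized by the nanodisk radii of the metasurface, rather than drawn at random as in this demo.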